What is streamroller?
The streamroller npm package is a file-based logging utility designed to help manage log files by supporting automatic rolling of logs based on size or date. It is particularly useful for applications that generate a lot of log data and need to manage disk space efficiently.
What are streamroller's main functionalities?
Rolling file streams based on size
This feature allows you to create a log file that rolls over when it reaches a certain size. In the example, a new log file is created when the current log file reaches 10 MB. The '3' indicates that a maximum of three backup files are kept.
const StreamRoller = require('streamroller');
// Roll over at 10 MB, keeping at most 3 backup files
const stream = new StreamRoller.RollingFileStream('example.log', 1024 * 1024 * 10, 3);
stream.write('This is a log entry');
stream.end();
Rolling file streams based on date
This feature allows log files to be rolled over based on a date pattern. The 'yyyy-MM-dd' pattern means the log file will roll over daily. The 'daysToKeep' option specifies that logs older than 10 days should be deleted; newer versions of streamroller call this option 'numBackups'.
const StreamRoller = require('streamroller');
// Roll over daily, keeping 10 old files (daysToKeep is the older name for numBackups)
const stream = new StreamRoller.DateRollingFileStream('example.log', 'yyyy-MM-dd', { daysToKeep: 10 });
stream.write('This is a log entry');
stream.end();
Other packages similar to streamroller
winston
Winston is a multi-transport async logging library for Node.js. Similar to streamroller, it supports file-based logging with log rotation, but it also offers more flexibility with multiple logging transports like console, file, and HTTP, and it supports custom log levels.
bunyan
Bunyan is a simple and fast JSON logging library for Node.js services. Like streamroller, it supports log rotation, but it focuses on JSON log entries and provides a more structured logging solution that is ideal for large-scale applications.
streamroller
node.js file streams that roll over when they reach a maximum size, or a date/time.
npm install streamroller
usage
var rollers = require('streamroller');
var stream = new rollers.RollingFileStream('myfile', 1024, 3);
stream.write("stuff");
stream.end();
The streams behave the same as standard node.js streams, except that when certain conditions are met they will rename the current file to a backup and start writing to a new file.
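Because these are standard writable streams, you can also pipe into them or listen for the usual stream events. A minimal sketch (the file name and size limit here are illustrative):

var rollers = require('streamroller');
// Roll at 1 KB, keep 3 backups.
var out = new rollers.RollingFileStream('app.log', 1024, 3);
// Pipe any readable stream into it, as with any other Writable.
process.stdin.pipe(out);
out.on('finish', function () { console.log('all writes flushed'); });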
new RollingFileStream(filename [, maxSize, numBackups, options])
- filename <string>
- maxSize <integer> - defaults to 0 - the size in bytes to trigger a rollover. If not specified or 0, then no log rolling will happen.
- numBackups <integer> - defaults to 1 - the number of old files to keep (excluding the hot file)
- options <Object>
  - encoding <string> - defaults to 'utf8'
  - mode <integer> - defaults to 0o600 (see node.js file modes)
  - flags <string> - defaults to 'a' (see node.js file flags)
  - compress <boolean> - defaults to false - compress the backup files using gzip (backup files will have a .gz extension)
  - keepFileExt <boolean> - defaults to false - preserve the file extension when rotating log files (file.log becomes file.1.log instead of file.log.1)
  - fileNameSep <string> - defaults to '.' - the filename separator when rolling, e.g. abc.log.1 or abc.1.log (with keepFileExt)
This returns a WritableStream. When the current file being written to (given by filename) gets up to or larger than maxSize, the current file will be renamed to filename.1 and a new file will start being written to. Up to numBackups old files are maintained, so if numBackups is 3 then there will be 4 files:
filename
filename.1
filename.2
filename.3
When filename size >= maxSize then:
filename -> filename.1
filename.1 -> filename.2
filename.2 -> filename.3
filename.3 gets overwritten
filename is a new file
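A sketch of the rotation just described, combined with the compress and keepFileExt options (the file name, size, and write counts are illustrative; with both options set, backups should end up named like file.1.log.gz):

var rollers = require('streamroller');
// Roll at 1 KB, keep 3 backups; gzip the backups and keep the .log extension.
var stream = new rollers.RollingFileStream('file.log', 1024, 3, {
  compress: true,
  keepFileExt: true
});
// Write enough data to force several rollovers.
for (var i = 0; i < 200; i++) {
  stream.write('log entry number ' + i + '\n');
}
stream.end();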
new DateRollingFileStream(filename [, pattern, options])
- filename <string>
- pattern <string> - defaults to yyyy-MM-dd - the date pattern to trigger rolling (see below)
- options <Object>
  - encoding <string> - defaults to 'utf8'
  - mode <integer> - defaults to 0o600 (see node.js file modes)
  - flags <string> - defaults to 'a' (see node.js file flags)
  - compress <boolean> - defaults to false - compress the backup files using gzip (backup files will have a .gz extension)
  - keepFileExt <boolean> - defaults to false - preserve the file extension when rotating log files (file.log becomes file.2017-05-30.log instead of file.log.2017-05-30)
  - fileNameSep <string> - defaults to '.' - the filename separator when rolling, e.g. abc.log.2013-08-30 or abc.2013-08-30.log (with keepFileExt)
  - alwaysIncludePattern <boolean> - defaults to false - extend the initial file with the pattern
  - numBackups <integer> - defaults to 1 - the number of old files that match the pattern to keep (excluding the hot file); replaces the deprecated daysToKeep option
  - maxSize <integer> - defaults to 0 - the size in bytes to trigger a rollover. If not specified or 0, then no log rolling will happen.
This returns a WritableStream. When the current time, formatted as pattern, changes, the current file will be renamed to filename.formattedDate, where formattedDate is the result of processing the date through the pattern, and a new file will begin to be written. Streamroller uses date-format to format dates, and the pattern should use the date-format syntax. e.g. with a pattern of "yyyy-MM-dd", and assuming today is August 29, 2013, writing to the stream today will just write to filename. At midnight (or more precisely, at the next file write after midnight), filename will be renamed to filename.2013-08-29 and a new filename will be created. If options.alwaysIncludePattern is true, then the initial file will be filename.2013-08-29 and no renaming will occur at midnight, but a new file will be written to with the name filename.2013-08-30.

If maxSize is populated, when the current file being written to (given by filename) gets up to or larger than maxSize, the current file will be renamed to filename.pattern.1 and a new file will start being written to. Up to numBackups old files are maintained, so if numBackups is 3 then there will be 4 files:
filename
filename.20220131.1
filename.20220131.2
filename.20220131.3
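A sketch combining the date-based options above (the filename, pattern, and limits are illustrative):

var rollers = require('streamroller');
// Roll daily; also roll within the day once the file passes 1 MB.
// With alwaysIncludePattern, the hot file is named with the date from the
// start (e.g. app.log.2022-01-31) instead of being renamed at rollover.
var stream = new rollers.DateRollingFileStream('app.log', 'yyyy-MM-dd', {
  alwaysIncludePattern: true,
  numBackups: 7,        // keep a week of old files
  maxSize: 1024 * 1024  // bytes; 0 (the default) disables size-based rolling
});
stream.write('This is a log entry\n');
stream.end();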